LogDet Divergence based Metric Learning using Triplet Labels
Authors
Abstract
Metric learning is fundamental to many learning algorithms and plays a significant role in many applications. In this paper, we present a LogDet divergence based metric learning approach to learn a Mahalanobis distance over the input space of the instances. In the proposed model, the most natural constraints, triplets, are used as the labels of the training samples. Meanwhile, to avoid overfitting, the model uses the LogDet divergence to regularize the learned Mahalanobis matrix so that it stays as close as possible to a given matrix. Besides, a cyclic iterative algorithm is presented to solve the objective function and accelerate the metric learning process. Furthermore, this paper constructs a novel dynamic triplet-building strategy to guarantee that the most useful triplets are used in every training cycle. Experiments on benchmark data sets demonstrate that the proposed model achieves improved performance compared with state-of-the-art methods.
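The abstract does not spell out the authors' cyclic iterative algorithm or their dynamic triplet-building strategy, but the objective it describes can be sketched: minimize the LogDet regularizer D_ld(M, M0) = tr(M M0^-1) - log det(M M0^-1) - d plus a hinge penalty over violated triplets, keeping M positive semidefinite. The following is a minimal generic gradient sketch under those assumptions, not the paper's algorithm; the function names, the margin, and the trade-off constant C are illustrative.

```python
# Hedged sketch (not the authors' exact method): gradient descent on a
# LogDet-regularized triplet hinge loss with a PSD projection each step.
import numpy as np

def logdet_div(M, M0):
    """D_ld(M, M0) = tr(M M0^-1) - log det(M M0^-1) - d."""
    d = M.shape[0]
    P = M @ np.linalg.inv(M0)
    _, logdet = np.linalg.slogdet(P)
    return np.trace(P) - logdet - d

def project_psd(M, eps=1e-8):
    """Clip negative eigenvalues so M stays a valid Mahalanobis matrix."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, eps)) @ V.T

def learn_metric(triplets, dim, M0=None, margin=1.0, C=1.0, lr=1e-3, n_iter=200):
    """triplets: iterable of (anchor, positive, negative) feature vectors.
    margin, C, lr, n_iter are illustrative hyperparameters, not from the paper."""
    M0 = np.eye(dim) if M0 is None else M0
    M0_inv = np.linalg.inv(M0)
    M = M0.copy()
    for _ in range(n_iter):
        grad = M0_inv - np.linalg.inv(M)            # gradient of the LogDet term
        for a, p, n in triplets:
            dp, dn = a - p, a - n
            # hinge is active when d_M(a, p) + margin > d_M(a, n)
            if dp @ M @ dp + margin > dn @ M @ dn:
                grad += C * (np.outer(dp, dp) - np.outer(dn, dn))
        M = project_psd(M - lr * grad)
    return M
```

The paper's dynamic triplet selection would replace the fixed `triplets` list with a per-cycle rebuild of the most informative triplets; that step is omitted here because the abstract gives no details.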
Similar resources
Learning Discriminative αβ-Divergences for Positive Definite Matrices
Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need for manual selection to achieve the best possibl...
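As a concrete reference point for one of the measures listed above, the Jensen-Bregman LogDet divergence between SPD matrices X and Y is JBLD(X, Y) = log det((X + Y)/2) - (1/2) log det(XY). A minimal sketch (the function name is illustrative):

```python
# Jensen-Bregman LogDet divergence for SPD inputs; uses
# log det(XY) = log det(X) + log det(Y).
import numpy as np

def jbld(X, Y):
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)
```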
Learning Discriminative Alpha-Beta-divergence for Positive Definite Matrices (Extended Version)
Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need for manual selection to achieve the best possibl...
Online Linear Regression using Burg Entropy
We consider the problem of online prediction with a linear model. In contrast to existing work in online regression, which regularizes based on squared loss or KL-divergence, we regularize using divergences arising from the Burg entropy. We demonstrate regret bounds for our resulting online gradient-descent algorithm; to our knowledge, these are the first online bounds involving Burg entropy. W...
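Regularizing with a divergence "arising from the Burg entropy" suggests a mirror-descent update with potential phi(w) = -sum_i log(w_i); the sketch below is written under that assumption, since the truncated abstract does not state the exact algorithm. The step size `eta` and the positivity clip are illustrative.

```python
# Hedged sketch: one mirror-descent step with the Burg entropy potential,
# assuming positive weights and squared loss on a linear model.
import numpy as np

def burg_mirror_descent_step(w, x, y, eta=0.01):
    g = 2 * (w @ x - y) * x             # gradient of the squared loss
    inv = 1.0 / w + eta * g             # mirror step: grad phi(w)_i = -1/w_i
    return 1.0 / np.maximum(inv, 1e-8)  # map back; clip to keep w positive
```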
Metric and Kernel Learning Using a Linear Transformation
Metric and kernel learning arise in several machine learning applications. However, most existing metric learning algorithms are limited to learning metrics over low-dimensional data, while existing kernel learning algorithms are often limited to the transductive setting and do not generalize to new data points. In this paper, we study the connections between metric learning and kernel learning...
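A one-screen illustration of the metric-kernel connection this abstract alludes to: a Mahalanobis matrix M = GᵀG is equivalent to the linear transformation G, and the inner product it induces extends naturally to unseen points. This is a hedged sketch of the basic correspondence only, not the paper's algorithm; names are illustrative.

```python
# M = G^T G gives both a metric and a kernel: k(x, y) = x^T M y = (Gx)^T (Gy).
import numpy as np

def mahalanobis_dist(x, y, M):
    d = x - y
    return float(d @ M @ d)

def induced_kernel(x, y, M):
    return float(x @ M @ y)

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5))
M = G.T @ G                       # any PSD matrix factors this way
x, y = rng.standard_normal(5), rng.standard_normal(5)
assert np.isclose(induced_kernel(x, y, M), (G @ x) @ (G @ y))
```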
Composite Kernel Optimization in Semi-Supervised Metric
Machine-learning solutions to classification, clustering and matching problems critically depend on the adopted metric, which in the past was selected heuristically. In the last decade, it has been demonstrated that an appropriate metric can be learnt from data, resulting in superior performance as compared with traditional metrics. This has recently stimulated a considerable interest in the to...
Journal title:
Volume Issue
Pages -
Publication date 2013